perm filename ABST.PAP[P,JRA] blob sn#151968 filedate 1975-03-26 generic text, type C, neo UTF8
A 1. Sexprs as abstract objects
  2. representation: as dotted pairs, and binary trees
  3. recursion, and conditionals as control.
  4. sel, const, and recog as d.s. manipulators

B.1. sequences as abstract objects
  2. sel, const, recog
  3. representation as sexprs
       map data= seq to lists
       map sel, const, and recog
       I/O
  4. beginnings of a.d.s. and  importance of abstractions


C.1. examples of diff and  value
      efficacy of abstraction
  2. stepwise refinement and top-down
     compare with bottom up
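The selector/constructor/recognizer idea of B.2-B.3 can be sketched as follows (Python rather than LISP, purely as an illustration; the names seq_cons, first, rest, and is_empty_seq are invented here, playing the roles of constructor, selectors, and recognizer for sequences represented as lists):

```python
# Sketch of B.2-B.3: sequences as abstract objects, represented as lists.
# All names are invented for illustration.

def seq_cons(x, s):
    """constructor: the sequence whose first element is x, remainder s"""
    return [x] + s

def first(s):
    """selector: the leading element of a non-empty sequence"""
    return s[0]

def rest(s):
    """selector: the sequence with its leading element removed"""
    return s[1:]

def is_empty_seq(s):
    """recognizer: distinguishes the empty sequence"""
    return s == []

# A program phrased in these terms never mentions the list representation;
# only the four definitions above would change if the representation did.
def seq_length(s):
    return 0 if is_empty_seq(s) else 1 + seq_length(rest(s))
```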
ideas
abstraction is a slight misnomer since it implies moving away from the specific,
whereas being specific in the first place was wrong.
c.f. "math is abstraction from fortran or 360"


informal-intuitive   →   informal d.s. lang
    ↓                        ↓
mathematical
    ↓                        ↓
numerical math
    ↓                        ↓
efficiency &             storage struct.
 mach. lang          →    mach. lang


note that in the above there is no need for a physical machine.
the ideas were as accessible to the ancients as mathematics, but it took
computers to bring realization. this might have something to
do with the complexity of the resulting (symbolic) computations.

must get away from the coder mentality; compare Dijkstra's comments on
clever tricks.

abstraction is best seen in the d.s.-non numerical setting since
one of the points of traditional math was to strip structure out of
problem domain and put it into mathematical functions.

why is important: not just theory but must show that this way of thinking
is beneficial. like selling any product: look, smell, feel better, make
you more desirable.

no good lisp program uses car-cdr

compare math and d.s. as disciplines: look at machine independence

good example of power of abstraction in understanding of existing 
construct: octal dump of Fortran prog. in execution 
=> discover instructions and data by abstraction (recoding to opcodes)
=> discover original Fortran  program by abstraction (decompiling)
  !! its difficult !!
So too in programming: the more machine (representation) dependent
the program is, the more difficult it is to understand. Thus the higher
representation we use to express our algorithm and data, the better.


on programming:
 1. its difficult!
 2. bugs conception
         cleverness

 3. d.s. programming -- very simple programs on complex data

 4. how to make better programmers


abstraction as a basis for a programmer's assistant
 what is required
  1. proper education and understanding of abstraction
       majority of programmers don't understand abstraction

  2. a programming environment which supports abstraction
       a notation for abstraction
       a system with minimal aids for checking consistency of assumptions
         where possible
       program manipulators
	 the problem is not just with the language. it's in the process
	 of discovery of the program.

history:
bottom up machine=>assembly=>hll
interactive programming is just getting out; it is a historical accident
that it is based on op. systems. Economics said it was too expensive to
turn the machine over to the programmer; thus operating systems, then
interactive programs, and finally interactive programming. But in actuality
the design of a proper facility is a distinct problem from that of
multiple users, since we should think about how to make a good system
for one user before thinking about how to make a system for many. What usually
happened was that much time was spent on system-programming of the operating
system and almost none spent on user facilities.

compare language design and operating systems--protection


general feel: looks like abstract data structures will take care of themselves
in the development phase of the programming.
        The need for a language to do interactive programming

         "  The  comment was  made  by  an  experienced
         programmer   ...,  that   the   much  maligned
         tendency to start  coding too  early may be  a
         symptom of  wanting to write  something formal
         but  lacking any alternative language in which
         to state the partially thought-out ideas!"


This should allow a programmer to formulate, modify, test and run
his program on-line.  Those of us who are familiar with the editor
E would rather compose text on-line than write out our ideas
on paper and then repeatedly rewrite sections until an acceptable
product is arrived at.  We should begin thinking of analogous systems
for the professional programmer.

Current editors, like E, are too text oriented. Note that E works as well
for document preparation as for program preparation, even though
the tasks are quite distinct.  Few of us still think in a text-oriented
card image fashion when describing programs. We think in terms of
the structure of either the program or the data. What we propose is to
give critical examination to the process of program construction
with the aim of producing a language for constructing algorithms.

The language is not to be a programming language and neither is the
target language. The language is to be a command language for construction
and manipulation of programs and the output from the session is to be
the specification of an algorithm, not necessarily in a programming language
but in a specification language whose components are rich enough that
a very large class of data-structure manipulating algorithms can be
naturally described.

Inherent in this approach to program construction is a bias toward
a particular programming style. The system will reinforce (enforce)
the policies of abstraction and elaboration as programming techniques.

Part of the mechanisms involved in responding to programming commands
may be requests for correctness, but the primary concern will be an 
attempt to show that a programming style plus the mechanisms to
support the pedagogy can make a significant contribution to the
quest for reliable software. The success of such an experiment is easy to
measure: Are programmers who are exposed to such an education
better than their counterparts, trained in the usual methods?
Are their programs easier to read, modify, and debug? Can they
generate their programs faster?

These ideas are closely related to
the ideas of "structured programming". But it is the ideas we are after 
as espoused by Dijkstra and implicit in the papers of McCarthy;
we are not arguing syntax or  the efficacy of goto's.
Our emphasis is in the "-ing" of "structured programming", for it is
in the act of creation that the structuring lies, not necessarily in the end
product. Indeed, in the machine's memory, all programs look alike.
The essence of the ideas is that we should program in "levels of
abstraction"; when we are satisfied with the correctness and adequacy
of a specific level of abstraction, we can begin elaboration of
subsidiary levels. What we are proposing is that the description of
operations and data structures on any level can best be done with a
simple command language, and the effect of the commands is to
generate, modify -- generally manipulate -- pieces of algorithms
and abstract data structures in a specification language, rather than
a specific programming language.

One may certainly say that such a specification language is really
a simple (and most likely inadequate) programming language, since
the output will be an abstract algorithm. But the point is to
develop a notation for describing algorithms and to develop techniques
for reasoning about such programming abstractions. Just as we
do not expect numerical analysts to reason about their FORTRAN programs,
we should not expect the data-structure programmer to reason about
his LISP program.

The parallels between mathematics and programming are quite seductive
and should probably be avoided, but mathematics is a well-defined
discipline interested in correctness of constructions, interested in
description of abstractions, and was a parent of the field of
Computer Science.

Just as the validity of a theorem in mathematics must be based on a convincing
proof, we should expect our programs to be subjected to the same degree
of conviction.

Likewise in mathematics, when an informal proof is suspect, the purveyor
must be able to give a convincing demonstration. Often this can be done in
terms of formal (set-theoretic or FOL) arguments. That it CAN be done
is by no means a guarantee that it will or should be done.  Seldom do
the pages of mathematical journals contain formal proofs.  In any case
it is the informal reasoning which predates the formal, and for most
of us, it is the informal "proof" which carries conviction.

It is this degree of "proof" which I believe must be given priority.
Though foundational work is as important here (in programming)
as it is in mathematics --indeed, perhaps more important since
in programming, we still do not fully understand the "rules of 
deduction" for correct program generation-- we must begin to develop a
language, or policy for reasoning about algorithms outside the
domain of informal mathematics.
For example, the foundational work, when completed, would be able
to demonstrate the correctness of a very convoluted and arcane piece
of code without giving any insight as to whether it is good, bad, or
indifferent code. Formal techniques would perform their duties
equally well in these cases, still leaving open the problems of
what is good programming style, or how to apply the encoded techniques
elsewhere.  Thus a program which used names "f", "g", and "h" instead
of "is_plus", "is_times" and "is_minus" would be equally acceptable
to the formalism.

Clearly, the area of "aesthetics of programs" is very shaky ground.
But there are simple tests for adequacy: is the program easier to
read, debug, modify, or prove correct than another? 
I believe that a rigorous formalism will be successful only after a
better understanding of the basis for the correctness of programs is
begun at the informal level.

My arguments are old, but still not well understood by competent 
programmers. Some of the most flagrant violators are LISP programmers;
they should know better, but every LISP programmer (myself included)
uses "car-cdr-cons" throughout the construction of even the highest
level routines. Surely the programmer is thinking about representing a
data structure when he is doing these operations, and just as surely
he should have more mnemonic names. Now the names are not as important as
the fact that using "car-cdr-cons" ties the program to a specific
representation. It is this latter condition which is unforgivable.
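The complaint can be made concrete with a small sketch (Python rather than LISP; the expression encoding and all names such as is_plus and operands are invented for illustration). Both versions compute the same thing, but only the second survives a change of representation:

```python
# Expressions are encoded here as nested lists like ['PLUS', x, y];
# the abstract version never commits to that choice.

def car(x): return x[0]
def cdr(x): return x[1:]

# Representation-bound: readable only if you know the encoding.
def count_sums_concrete(e):
    if not isinstance(e, list):
        return 0
    n = 1 if car(e) == 'PLUS' else 0
    return n + sum(count_sums_concrete(sub) for sub in cdr(e))

# Abstract: the recognizers and selectors carry the meaning, and only
# these three definitions would change if the representation did.
def is_term(e):  return not isinstance(e, list)
def is_plus(e):  return isinstance(e, list) and e[0] == 'PLUS'
def operands(e): return e[1:]

def count_sums(e):
    if is_term(e):
        return 0
    n = 1 if is_plus(e) else 0
    return n + sum(count_sums(sub) for sub in operands(e))

sample = ['PLUS', 'x', ['TIMES', ['PLUS', 1, 2], 'y']]
```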

Thus education about the benefits of abstraction plus the
on-line tools to reinforce the programming style lead to the other
much over-used word: "methodology".

We should not attempt to support ANY programming style just because it
exists, nor should we attempt to legislate a specific methodology because
it conforms to our current stock of correctness techniques.  What we must do
is to show that better programmers and better programs result from this
style.

There are quite a few popular misconceptions dealing with programming:
1) that programming is easy. As with many everyday situations, it is not
until we attempt a constructive solution that we see the difficulties 
involved.

2) that top-down or bottom-up programming will save us; 
Anyone who has done any amount of programming will recognize the
futility of such dogma. Though the ideas of "programming with
abstraction" lend themselves to top-down programming, it is frequently
the case that we make wrong or poor choices and must  regroup and
try a different tack. 

3) that all we need  do is construct a new language.
This argument is similar to those supporting the development of such
mammoths as PL/1. The difficulties of programming are not those of
finding the right constructs; rather they are related to very poor
programming techniques being exercised by the programmers.


I do not believe that any of the above are true. We must learn to express
ourselves better and develop programming style as well as methodology.
The current approaches to program development stem from a bottom-up
development of the field.
That is, the generation of programming aids proceeded from octal coding to
assembly language to the first Fortran compiler.
The difficulty is that people are still trained in this historical sequence;
thus people are trained to think in terms of representation and the programming
languages which we use are designed to support this bottom-up way of thinking.
That in itself isn't bad as long as we realize it, but we must attempt
to develop techniques for talking at a level of abstraction.
A further difficulty is that programming aids still haven't progressed
much past the concepts introduced by Fortran. Certainly embellishments
and improvements have been made, but the basic ideas are the same.
The only recognizable improvement is the development of sophisticated
display interaction, allowing programmers to edit and debug their programs
at a greater level of productivity. There appears, however, to be a development
underway which should make a considerable improvement in productivity.
That is the development of interactive programming systems. Most people
nowadays understand what is meant by an interactive program. What is
currently being advocated in several research institutions is the 
investigation of interactive programming. This is a nice blend of the
techniques made available by displays and the research into program
construction, verification and debugging. In a subsequent section
we will discuss more fully some of the recent research results.

Another parallel with mathematics yields a fruitful line of research.
The behavior of several current proof checkers is to maintain a file
of the steps that the user gave in an interactive session.
Upon completion the final proof can be extracted from the trace.
A similar device should be applicable in interactive programming; this
transcript, when played back, gives a documentation of the programming
enterprise. Thus structured programmING in the best sense of the word.


Relation to other work

Several systems have been built which explore the area of interactive
programming. One of the most recent is described in the thesis
of D. Swinehart. This thesis is also a good survey of related work.
The basic flavor of the work on IP is to give aid to the programmer
after he has constructed his program. Swinehart gives excellent coverage
for interactive debugging and monitoring of running programs.
He recognizes the absolute necessity of sophisticated display
techniques when programming interactively.

We would differ from him in two ways. More superficially, we would
emphasize the construction of algorithms in a specification language
rather than in a particular programming language. More importantly,
our emphasis is on the initial attempt to construct correct
algorithms. To use his terminology, we are interested in developing
"manipulative" techniques in program construction. Thus his editing
is at the "symbolic" level, dealing with  string manipulations of
pieces of program, whereas we see program segments as structural units 
to be manipulated as entities.  We believe that programmers think this 
way and that exploitation of this behavior by a system can measurably
aid the construction of correct algorithms. Thus we believe that an
interactive programming system should become useful at the immediate
level of formulating the algorithm, not postponed until the debugging
phase. The system should encourage  a systematic style of programming
consistent with the notions of abstraction advocated by Dijkstra
and implicit in the work of McCarthy. 

We see future applications of work like J. Low's in the transformation
of data structures to "more efficient" implementations. He recognizes
the benefits of  establishing correctness of programs at the highest
possible level of abstraction, consistent with  the structuring
properties of the algorithm. Though he doesn't directly address the 
questions of correctness, certain conclusions are apparent.
For example he discusses the implementations of sets as a data structure.
He notes that the usual implementation is in terms of sequences, or
lists with special operations to effect the "set-hood". He notes that
it should be easier to prove the correctness of the algorithms as
set operations, rather than as special relationships on sequences.
Indeed if we think about the problems of verification of the
"implemented" algorithm, we would note that our intuitions about
the set-algorithm which we wish to express as assertions, would also
have to be translated to relationships involving sequences. Thus
we would have a double source of possible errors: our formulation
of the algorithm in terms of sequences could be in error, and our
expression of the assertions over sets could be translated inadequately
into assertions over sequences. N. Suzuki has noticed this behavior
in his discussion of verification of sorting. 
However it appears that he  has only simplified part of the problem;
the algorithm is still encoded at the low-level representation, while
the assertions are expressed in the higher level, with explicit
mappings given to relate the high-level assertions with the low-level
code. It seems much more valuable to address both questions-- construction
and verification-- at the higher level and perform the transformations
to implementations  afterwards.
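A tiny sketch of the point (Python; the names are hypothetical): the set-level statement carries its own conviction, while the sequence-level implementation adds code -- and hence assertions -- whose only job is maintaining "set-hood".

```python
# Illustrative only: an operation stated over sets, and the same operation
# after sets are implemented as duplicate-free lists.  Any assertion about
# the abstract version must be translated through the representation
# before it applies to the concrete one.

def union_abstract(a, b):
    # the defining property: x in union(a, b) iff x in a or x in b
    return a | b

def union_as_lists(a, b):
    # sets represented as duplicate-free lists
    result = list(a)
    for x in b:
        if x not in result:   # extra code whose only job is "set-hood"
            result.append(x)
    return result
```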

 We have performed
similar experiments with tree-sort. Tree-sort is a prime example of
a conceptually clear algorithm made totally obscure by including
the representation of the data-structures in the algorithm.
When the representation is abstracted out and the structures are
referenced by abstract operations, the behavior of the algorithm
is quite transparent.

Work like Low's would be an integral part of a large system, but
it seems that the primary area of research should be focussed
on developing the tools for adequate construction of correct algorithms
with questions of efficiency becoming  of secondary importance.


Similarly a system like COPILOT would be of immense help in 
debugging programs, but its treatment of program construction is inadequate.


Snowdon -- poor; good survey but shallow results

Jones ?? -- This  appears to be on the right track.  The formal proof
of a complex algorithm is given stepwise as the structure of the algorithm
is developed. At each level  more and more detail is given and perhaps
more structure is added to the  data -- sets represented as sequences--
as the detail enlarges he must prove that the hypotheses of the previous
level hold. He at least recognizes the right questions, but I must
read more carefully for more detail. Do the experiment for machineability!


LCF2


Hansen -- too much syntax to allow reasonable construction; e.g.
user must use "stratification rules"  in the derivation even though
they have no corresponding semantic value. It is however on the
right track.
His system is driven by selection of syntactic (BNF) rules of a full-fledged
programming language. This leads to difficulties because of excessive
complexities of syntax; therefore the problem of making the syntax
description unambiguous is non-trivial. This leads to difficulties for
users. Cf. stratification above.  The other failing is the dependence
on syntax alone.  There is an implication (but only an implication) about
the way we should structure our programming. More seriously, there
is nothing said about attempting to use semantics as we build the program.

burstall-darlington-moore  abstract programming
This is again the right idea; give a specification language which is sufficiently
rich that we can give natural (machineable) descriptions of data structure
algorithms.

Zilles

Ligler
two pronged attack on reliable programming
 1 education

major difficulty is poorly educated programmers

compare mathematics training  "mathematical maturity"
 a basic understanding of mathematics: numbers, functions, and algorithms.
 a basic understanding of mathematical reasoning
 a knowledge of the tricks and proof techniques useful in specific contexts
    induction, counting args, contradiction, etc., ...
 an intuition about when such tools would be applicable

what is analogous in c.s. ---
 an understanding of programming language components: data structures, operations
   and control structures.
 an understanding of valid reasoning in programming: conditions hold at a step --
   then application of an operation or control implies new conditions hold.

 an intuition about how good programming is done: abstraction and stepwise refinement
  (show examples and results in teaching outlining benefits of such )
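A minimal sketch of the "valid reasoning" item above (Python; the function and its conditions are invented for illustration): a condition holds at a step, an operation or control structure is applied, and a new condition follows.

```python
# The assertions record the conditions; the loop is the control structure
# that carries them forward.

def integer_sqrt(n):
    assert n >= 0                          # condition holding on entry
    r = 0
    while (r + 1) * (r + 1) <= n:          # loop preserves: r*r <= n
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)  # condition implied on exit
    return r
```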

 2 tools

  a "specification language" of the (presumptuous) power of informal mathematics
which will support abstraction and be amenable to (at least) informal reasoning.
    such is necessary for the proper teaching of programming.

an (initially quite) simple programming environment which supports
or enforces this programming style. Examples to show how even a modest
system can aid immensely.

commands for program modification and construction.
we must begin imposing structure on programs, rather than attempting to
discover it after the fact.

 simple commands for construction, to select schemas and fill in parts
 if we keep a history-file a la lcf we can document our programs
by "playing" back the transcript.

inherent part of this "programming policy" is the ability to
give representations to "constants" of various data types. For example
we never expect the output (A . (B . (C . (D . NIL)))), but rather
(A B C D). Similarly for user-defined a.d.s.; this will be
of utmost importance when executing programs. consider the theorem prover,
and the hair involved in interpreting internal representations of clauses
and other structures.  Thus SDIO is crucial.
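The I/O point can be sketched (Python; the pair encoding and all names are illustrative): the same internal structure printed through its representation versus through the abstraction it represents.

```python
# Lists are encoded here as nested pairs (car, cdr), with None for NIL.

def cons(a, d):
    return (a, d)

def print_as_pairs(x):
    """the raw representation: dotted pairs"""
    if x is None:
        return 'NIL'
    if not isinstance(x, tuple):
        return str(x)
    return '(' + print_as_pairs(x[0]) + ' . ' + print_as_pairs(x[1]) + ')'

def print_as_sequence(x):
    """the abstract object the user actually wrote"""
    items = []
    while x is not None:
        items.append(str(x[0]))
        x = x[1]
    return '(' + ' '.join(items) + ')'

lst = cons('A', cons('B', cons('C', cons('D', None))))
```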

why?
must teach people to do "reliable programming" from the beginning.

the current interest in "programming methodology" must be viewed carefully.
attempting to uncover "logically valid" rules of programming is admirable,
but must be kept in perspective.
Compare mathematics; there we do have well-developed formalisms
for say, first-order formal systems and valid rules of inference are
well known. Indeed, even informal valid reasoning is understood: we can
recognize convincing proofs without formalism. But these areas are not
what we normally include in the subject of "mathematical maturity".
That area is more primitive: "how should I proceed to discover a proof" --
"what is a reasonable approach to take"..., etc. The art of programming
does not even have this primitive area under control. It is the area
which Dijkstra is addressing in his papers on structured programming.
But consider all the absurd papers on "Structured xxx" which boil down
to programming without goto, or with block structure. A great many
people still do not understand that what Dijkstra was talking about
was abstraction. Indeed this is what McCarthy's abstract syntax is all about.

Notice that both areas-- education and programming support-- must
be cultivated. Compare teaching a mathematics student first-order
logic without a proper grounding in mathematical fundamentals; he
would not be in a position to apply the tools. We must not condemn
programming to a similar fate.  Similarly, if we show the student (or professional)
correct techniques of programming with abstractions, but don't give him
a programming system which reinforces his construction techniques then
our job is only partly complete. To sell such a product we must show that
a user writes better programs with such a system.

A properly constructed specification language has other benefits.
It should be possible to develop proof techniques to help bring
rigor to the informal reasoning process. This task is obviously
quite difficult, but --and perhaps not so obvious-- this task is
secondary to the task of developing informal tools of right reason.
Also these reasonings are meant to be in the informal (meta)
language level, rather than (t)reasonings about programs.
We don't expect numerical analysts to reason about their Fortran programs;
we expect that the reasoning be carried out at a prior level and
the certified algorithm be transcribable to one of a large set
of programming languages. 
Certainly these reasonings are subject to
errors and certainly the transcriptions can be inaccurate. People
make mistakes. 

Difficulties of current verification approaches:
1. assertions are most usually supplied by the writer of the
program. The danger is great that the misconceptions about the
program will spill over to misconceptions about what are  sufficient
assertions. A means of alleviating this  difficulty is to allow
the highest possible representation of the algorithm. Instead of
proving properties of pointer-manipulations, state the algorithm (being
represented by the program) at a higher level and verify properties of that
algorithm.

2. The related problem of just too low-level a representation of
algorithms and data structures in current programming practice.
LISP programs should never use car-cdr-cons. We should follow the
lead of say numerical analysis, proving properties of algorithms,
encoding those algorithms as programs (with some degree of informal
confidence), and resort to proofs of the implementations only
in those cases where the representation makes a difference.
For example approximation or efficiency considerations may
require more careful analysis due to machine representation; but the
confidence in the basic numerical scheme is established using the
mathematical properties of the functions, not the representations on
a specific machine.

3. The mechanics of getting people to write better programs is ignored.
Verifiability is only one facet of the problem of reliable software.
The technology which will be available in the foreseeable future
will not be sufficiently powerful to verify non-trivial programs, written
by competent programmers who are not trained in formal methods.
We should begin to attack the questions of how to make better programmers
and as a spin-off from this we should see new techniques for aiding
correctness results.

4. Virtually all verification work has been done on programs which
are known to be correct. The problems of generating a correct program
using verification as an aid are virtually unexplored. Used in
the context of construction, we will be using verification conditions
as a consistency check against the current representation of the program.
Here we will be faced with deciding which to believe -- the verification conditions
or the program. Since both assertions and program are in the process
of being generated, we will have to be prepared to "debug" both
simultaneously.

5. There is also a certain level of "idealization" involved;
consider reduction rules which ignore undefined or non-terminating computations.
Also, as with any correctness proofs, the correctness is "relative".
But that's really all we can ever expect. The difficulty with
verification is whether it is really the best way to proceed.

***on the positive side***

1.  at the most superficial level, the programmer can gain much even
if he only understands the problems of correctness at an informal level.
The awareness of the difficulty of  programming in general, and the
realization of the consequences of the constructions he makes in his
program (as he makes them) should lend sobriety to the whole programming
process.

2. It is not at all clear that the formal techniques are diametrically
opposed to good programming design. Most experiments have been carried
out on completed programs, and indeed on programs written in very
specific representations. It appears that (1) program construction
and verification carried out together, (2) programming at the highest
level possible, coupled with (3) programming with a stepwise elaboration,
all should make formal verification more realistic.

Again, flaunting an analogy with mathematical proofs, we should develop
the informal style of correct programming, but should simultaneously
develop formal techniques, so that if our informal reasoning is questioned
we can  justify these steps just as we are able to justify informal steps
in our mathematical proofs in terms of rules of deduction.
gcd[x;y] <= 
	[x=y  → x;
	 x>y  → gcd[x-y;y];
	 T    → gcd[y;x]
	]

how do you prove correctness?
 note that gcd[x;x] = x
	   gcd[x;y] = gcd[y;x]
	 gcd[x-y;y] = gcd[x;y]


then simplification   should yield gcd[x;y] ≡ gcd[x;y]
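A direct transcription of gcd into runnable form (Python here, since the M-expression notation above is not executable), together with mechanical checks of the three identities used in the informal argument:

```python
# gcd by repeated subtraction, as given above.
def gcd(x, y):
    if x == y:
        return x
    if x > y:
        return gcd(x - y, y)
    return gcd(y, x)

# the identities: gcd[x;x] = x, gcd[x;y] = gcd[y;x], gcd[x-y;y] = gcd[x;y]
for x in range(1, 20):
    for y in range(1, 20):
        assert gcd(x, x) == x
        assert gcd(x, y) == gcd(y, x)
        if x > y:
            assert gcd(x - y, y) == gcd(x, y)
```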




INTRODUCTION

In a  nut-shell: Interactive program  construction as an  approach to
reliable  program  construction.   It's  not  verification, automatic
programming, mind-reading, or DWIM. 

Specifically I  have been  experimenting in  teaching  data-structure
programming   (LISP)  using   abstraction  followed   by  elaboration
(step-wise refinement, structured programming, call-it-what-you-will...)
as the correct style.  It is this style of specification of
algorithms which lends itself naturally to correct construction. 

It  is my  belief  that this  style of  programming,  reinforced with
interactive facilities  can make  a significant  contribution to  the
construction of  reliable  software.   I would  propose two  things. 
First, that we begin study of the mechanics of an interactive
programming lab which will help a user informally construct programs
in this style. This would be a valuable experiment testing the
validity of the claims for structured programming, and the design of
the system would involve a reasonably sophisticated piece of "human
engineering".  Second, I would propose that we investigate means of
formal semantics specification with correct construction of programs
in mind. For example, the verification-oriented rules of Hoare can be
used as the basis of a command language for an interactive program
constructor. There are other means of specifying semantics which can
be applied in this manner; such techniques should be investigated.

If  such techniques are indeed viable  then this approach should lead
to a more reasoned approach to automatic programming.   For then, the
automatic construction of a  program is a machine-applied sequence of
interactive  programming  commands.    Our  philosophy  is  that   of
McCarthy: "In order for a program to be capable of learning something
it must first  be capable of being told it." Thus we would understand
the programming process before  attempting to construct an  automatic
programmer.  Experience  tells us  that  the  programming process  is
poorly understood. 


Later sections will give a detailed description of our approach to
program construction, but we will outline the process here.
We believe that any attempt to influence current programming style
must be tested against accepted practice. Like it or not, we are faced
with an enormous backlog of poorly trained programmers and we must
be able to demonstrate that our methodologies --regardless of how
valuable they are as theoretical tools-- have demonstrable benefits.
Thus the methodologies must either put a firm foundation on existing
practices, or if changes are advocated, must give convincing demonstration
of their worth. 
For these reasons we put emphasis on informal techniques for constructing
reliable and understandable software. 

For  example when I say  "Now I want to  write a recursive algorithm,
say f[x;y] <=  ..., recursive on y",  that specifies a great  deal of
the structure  of f: the body  of f is a  conditional expression; the
predicates in the conditional are recognizers, determined by the type
of y; the corresponding expressions in the  conditional are yet to be
determined computations.  But as we elaborate these computations, the
information implied  by the  branching guides  us (or  a machine)  in
maintaining low-level consistency checks  on further function calls. 
As  we develop  the algorithm,  we elaborate  the function  calls and
elaborate, or pick representations for, the abstract data structures. 
This is an excellent way to present data-structure programming in the
classroom, giving a representation-independent presentation.  Indeed,
this approach is an admirable way to teach programming in general,
and the "monologue" which the instructor uses to describe the
construction can be turned into commands to a "program constructor".
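The skeleton that such a declaration fixes can be sketched; the
sketch below is in Python rather than LISP, and the recognizer and
selector names (is_null, first, rest) are illustrative, not drawn
from the text.

```python
# Abstract interface for the sequence y: the algorithm examines y
# only through a recognizer and two selectors.  (Illustrative names.)
def is_null(y): return y == ()   # recognizer: is y the empty sequence?
def first(y):   return y[0]      # selector: first element of y
def rest(y):    return y[1:]     # selector: y without its first element

# "f[x;y] <= ..., recursive on y" determines this much: the body is a
# conditional, its predicates are the recognizers for y's type, and
# only the branch bodies remain to be elaborated.
def f(x, y):
    if is_null(y):
        return x                      # base case, to be elaborated
    return f(x + first(y), rest(y))   # inductive case, to be elaborated

print(f(0, (1, 2, 3)))   # with these elaborations, f sums y onto x: 6
```

A different elaboration of the two branches, under the same skeleton,
yields a different algorithm with the same guaranteed shape.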

There are  some nice  side-effects derived from  this approach.   The
informal  feeling of manageability of  the construction process lends
itself  to more  formal  correctness  proofs of  even  the  partially
specified algorithm.   Also the  "transcript" of the  commands to the
program constructor  gives a  nice documentation  of the  process  of
program derivation.  This transcript can serve the programmer in
remembering "what the hell am I doing!?" or someone else: "what the
hell does he think he's doing!??"


WHY DO IT? 

1.   Reliable programs:  I think  there are  basic  flaws in  current
approaches to program reliability.  We don't need new programming
languages, and forgiving and/or mind-reading systems are "premature".
Research  into  methodology  is  necessary  to  establish  "rules  of
deduction" for  programming, but current  approaches, in verification
particularly, tend to be much too formal; we don't yet even have a
good handle on maintaining correctness at an informal level.

2. Proper education in programming is seriously lacking.  It is
here that a proper "methodology" must be set.  Reliability must be
recognized as  an  integral part  of  the construction  process  when
designing an algorithm. 




WHAT IS NEEDED.  1. Specification language.  We must recognize the
necessity of a simple specification language in which we can describe
our informal data structure algorithms.  LISP is the obvious
candidate.

2.   Style.    The concerted  effort  to separate  description of  an
algorithm from the coding of  it in a specific programming  language.
The  elaboration of  the  abstract  algorithm and  its  corresponding
abstract data structures leads to a style of programming. 
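A minimal sketch of this separation, assuming Python and invented
operation names: the algorithm below touches its data only through
constructors, selectors, and recognizers, so the representation
(here, sexpr-style dotted pairs built from tuples) could be revised
without touching the algorithm.

```python
# --- representation layer: the only place tuples appear ---
def mk_empty():   return None      # constructor: the empty sequence
def prefix(e, s): return (e, s)    # constructor: a "dotted pair"
def is_empty(s):  return s is None # recognizer
def first(s):     return s[0]      # selector
def rest(s):      return s[1]      # selector

# --- abstract algorithm: written entirely in terms of the above ---
def reverse(s, acc=None):
    """Reverse a sequence without mentioning its representation."""
    if acc is None:
        acc = mk_empty()
    if is_empty(s):
        return acc
    return reverse(rest(s), prefix(first(s), acc))

print(reverse((1, (2, (3, None)))))   # (3, (2, (1, None)))
```

To change to a representation with cheaper access, only the five
representation functions need to be rewritten.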

3.   The development of a programming  environment which supports and
encourages  this  style  of   programming.    Initially,  a   program
construction   language,  with   commands  to   manipulate  partially
constructed  programs, to request proofs where  desired, to do simple
bookkeeping (consistency checking on types, what subfunctions or data
structures are yet to be elaborated, ...).
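As a toy illustration of the simplest such bookkeeping (the class and
its commands are invented for this sketch, in Python), a constructor
can record which subfunctions have been called into existence but not
yet elaborated:

```python
class Constructor:
    """Toy bookkeeping for partially constructed programs."""
    def __init__(self):
        self.defined = {}      # elaborated subfunctions: name -> body
        self.pending = set()   # names used but not yet elaborated

    def use(self, name):
        """Note a call on a subfunction that may not exist yet."""
        if name not in self.defined:
            self.pending.add(name)

    def elaborate(self, name, body):
        """Supply a body for a subfunction, discharging the obligation."""
        self.defined[name] = body
        self.pending.discard(name)

    def report(self):
        """What remains to be elaborated?"""
        return sorted(self.pending)

c = Constructor()
c.use("termination_test")
c.use("recursive_step")
c.elaborate("termination_test", lambda y: y == ())
print(c.report())   # ['recursive_step']
```

A real constructor would also carry types and partial proofs with
each pending name, but the principle is the same: the system, not the
programmer's memory, tracks the open obligations.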


4. A demonstration of the efficacy of the proposed scheme. It must be
possible to  demonstrate  that people  trained  in this  "style"  are
"better" programmers. 


HOW TO DO IT.  

1.   Structural editing.   At  the most  concrete level  we need  two
"devices".    First an  "editor"  which is  controlled  by structural
requests,  rather  than  by  strings.    Hansen  attempted   such  an
experiment,  but his  scheme of  "selection-not entry"  was much  too
syntactic.   He  was  constructing programs  in a  "real" programming
language with all its syntactic hair. Thus his user frequently had to
specify  application  of  "stratification"  equations  which  had  no
corresponding "semantics".  Relying on a very simple specification
language should allow algorithmic specification which is more
"semantic".

Also there should be a bias toward our virtuous programming style: we
should be able to manipulate our "abstract machines" (a la Dijkstra)
without regard for their underlying representations.

The mechanics of such an editing system clearly require the use of
display terminals; such interaction would not be feasible on other
devices.

2. Transcripts. I think the idea of "documentation by transcripts" of
the transactions with such a system can be exploited with great
benefit.  It has always seemed to me that the two ideas of
"structured programming"  are (1)  abstract syntax (data  structures)
and (2) the  process by which the program is written.  Thus programs,
themselves, are really  not structured or  ill-structured; it is  the
"-ing" of "structured programming" that's important. 


3.   Supporting proofs.   Now  to something more  specific.   If this
approach  really has merit, then  we should be  able to reinforce the
intuitive  benefits   of  clean  specification,   with  more   formal
justification when needed or requested.  In mathematics we use
informal reasoning to give convincing proofs (to all but the most
sceptical), but if pressed for details, mathematical foundations
gives us tools -- rules of deduction -- which we can resort to for
aid.  We are beginning to develop similar tools for programming, and
these tools can be used within the programming system which I am
advocating.  For example, I have used the rules specified in
Igarashi-London-Luckham  (V1-V8   pp.28)  in   guiding  the   correct
construction of a few algorithms -- unification, in particular.  Thus
the formal  verification  and  the  program  construction  can  occur
together, each being used as a check on the other.  This seems to me
a much more reasonable approach to correctness than lacing a
completed program with assertions and presenting it to a verifier.
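The flavor of that co-development can be suggested in a few lines of
Python: the loop invariant is checked at each step as the program is
exercised, playing the role the verification rules play during
construction.  (gcd and the particular invariant are our illustrative
choices, not an example from the text.)

```python
from math import gcd as reference_gcd  # used only to state the invariant

def gcd(x0, y0):
    """Euclid's algorithm, constructed with its invariant attached."""
    a, b = x0, y0
    while b != 0:
        # the invariant carried through construction: gcd(a,b) is
        # unchanged by each step of the loop
        assert reference_gcd(a, b) == reference_gcd(x0, y0)
        a, b = b, a % b
    # on exit b == 0, so a itself is the answer the invariant promises
    assert a == reference_gcd(x0, y0)
    return a

print(gcd(12, 18))   # 6
```

Writing the assertions while the loop body is being decided -- rather
than decorating the finished program -- is the point being argued.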


4.   Extensions.   Assuming the  validity of  this approach,  several
extensions  or   improvements  come  to   mind.    The   question  of
specification  languages should be investigated. Expansion to include
more  complex  constructs   (for  example,  perhaps  primitives   for
parallelism)  can  be  pursued.    A  more pressing  problem  is  the
systematic translation  of algorithms  designed in  our language,  to
programs in a real  language.  This includes the  obvious problems of
syntactic differences, but also the more complex problems of
translating data structures to efficient representations; Low's work
is relevant.
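The representation problem can be suggested with a small Python
sketch (the set operations and both representations are invented for
illustration): the client algorithm is written once against abstract
operations, and translation to an efficient representation only swaps
the operation table underneath it.

```python
# Naive representation: an unordered Python list (linear membership).
list_rep = {
    "empty":  lambda: [],
    "insert": lambda e, s: s if e in s else s + [e],
    "size":   lambda s: len(s),
}

# Efficient representation: a hashed set (constant-time membership).
hash_rep = {
    "empty":  lambda: frozenset(),
    "insert": lambda e, s: s | {e},
    "size":   lambda s: len(s),
}

def distinct_count(items, rep):
    """Client algorithm: counts distinct items via the abstract ops only."""
    s = rep["empty"]()
    for e in items:
        s = rep["insert"](e, s)
    return rep["size"](s)

print(distinct_count([1, 2, 2, 3], list_rep),
      distinct_count([1, 2, 2, 3], hash_rep))   # 3 3
```

A systematic translator would choose the table automatically from
observed usage; doing that well is the hard problem alluded to above.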



CONCLUSION

I think a case can be made for the construction of such an
interactive programming system.  It is modest in scope but should be
of value both to students and to practitioners.